Retrieval-Augmented Review Generation for Poisoning Recommender Systems

Yang, Shiyi, Li, Xinshu, Zhou, Guanglin, Wang, Chen, Xu, Xiwei, Zhu, Liming, Yao, Lina

arXiv.org Artificial Intelligence

Abstract--Recent studies have shown that recommender systems (RSs) are highly vulnerable to data poisoning attacks, where malicious actors inject fake user profiles, including a group of well-designed fake ratings, to manipulate recommendations. Due to security and privacy constraints in practice, attackers typically possess limited knowledge of the victim system and thus need to craft profiles that transfer across black-box RSs. To maximize the attack impact, the profiles must also remain imperceptible. However, generating such high-quality profiles with restricted resources is challenging. Some works suggest incorporating fake textual reviews to strengthen the profiles; yet, the poor quality of the reviews largely undermines the attack effectiveness and imperceptibility in practical settings. To tackle the above challenges, in this paper, we propose to enhance the quality of the review text by harnessing the in-context learning (ICL) capabilities of multimodal foundation models. To this end, we introduce a demonstration retrieval algorithm and a text style transfer strategy to augment naive ICL. Specifically, we propose a novel practical attack framework named RAGAN to generate high-quality fake user profiles, which can provide insights into the robustness of RSs. The profiles are generated by a jailbreaker and collaboratively optimized by an instructional agent and a guardian to improve attack transferability and imperceptibility. Comprehensive experiments on various real-world datasets demonstrate that RAGAN achieves state-of-the-art poisoning attack performance. Impact Statement--Recommender systems play a vital role across e-commerce, online content, and social media platforms, benefiting both users and businesses through personalized suggestions and improved engagement. These advantages also create incentives for malicious actors to exploit them. 
Recent studies reveal that modern recommender systems are vulnerable to data poisoning attacks, leading to unfair competition and loss of user trust. However, existing attack methods often have limited practicality, overestimating system robustness under real-world constraints.
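Setting the full framework aside, the demonstration-retrieval step can be illustrated with a toy top-k similarity search over candidate reviews. The bag-of-words embedding and all names below are illustrative stand-ins, not RAGAN's actual components:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a real system would use a learned text encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_demonstrations(query_review, corpus, k=2):
    """Rank candidate reviews by similarity to the query and keep the
    top-k as in-context demonstrations for the generation prompt."""
    q = embed(query_review)
    return sorted(corpus, key=lambda r: cosine(q, embed(r)), reverse=True)[:k]

corpus = [
    "great camera battery lasts all day",
    "the shipping was slow but support helped",
    "camera quality is great and the battery is solid",
]
demos = retrieve_demonstrations("is the camera battery good", corpus, k=2)
```

The retrieved demonstrations would then be prepended to the generation prompt so the model can imitate their style and vocabulary.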


Review-Incorporated Model-Agnostic Profile Injection Attacks on Recommender Systems

Yang, Shiyi, Yao, Lina, Wang, Chen, Xu, Xiwei, Zhu, Liming

arXiv.org Artificial Intelligence

Recent studies have shown that recommender systems (RSs) are highly vulnerable to data poisoning attacks. Understanding attack tactics helps improve the robustness of RSs. We intend to develop efficient attack methods that use limited resources to generate high-quality fake user profiles that achieve 1) transferability among black-box RSs and 2) imperceptibility to detectors. To achieve these goals, we introduce textual reviews of products to enhance the generation quality of the profiles. Specifically, we propose a novel attack framework named R-Trojan, which formulates the attack objectives as an optimization problem and adopts a tailored transformer-based generative adversarial network (GAN) to solve it, so that high-quality attack profiles can be produced. Comprehensive experiments on real-world datasets demonstrate that R-Trojan greatly outperforms state-of-the-art attack methods on various victim RSs under black-box settings and exhibits good imperceptibility.
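R-Trojan's GAN machinery aside, the basic shape of an injected "push" profile — a maximally rated target item surrounded by plausible filler ratings — can be sketched as follows; all names and the rating heuristic are hypothetical, for illustration only:

```python
import random

def make_push_profile(target_item, filler_pool, item_means,
                      n_fillers=3, seed=0):
    """Sketch of a promote ('push') attack profile: the target item gets
    the maximum rating, while fillers get ratings near each item's
    observed mean so the profile blends in with genuine users."""
    rng = random.Random(seed)
    fillers = rng.sample(filler_pool, n_fillers)
    profile = {target_item: 5.0}
    for item in fillers:
        noisy = item_means[item] + rng.gauss(0, 0.5)
        profile[item] = min(5.0, max(1.0, round(noisy)))
    return profile

item_means = {"i1": 3.2, "i2": 4.1, "i3": 2.5, "i4": 3.8}
profile = make_push_profile("target", list(item_means), item_means)
```

A learned generator such as R-Trojan's would replace the hand-coded heuristic above, producing ratings (and review text) optimized jointly for attack impact and detector evasion.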


Deep Reinforcement Learning for Cyber System Defense under Dynamic Adversarial Uncertainties

Dutta, Ashutosh, Chatterjee, Samrat, Bhattacharya, Arnab, Halappanavar, Mahantesh

arXiv.org Artificial Intelligence

Development of autonomous cyber system defense strategies and action recommendations in the real world is challenging, and includes characterizing system state uncertainties and attack-defense dynamics. We propose a data-driven deep reinforcement learning (DRL) framework to learn proactive, context-aware defense countermeasures that dynamically adapt to evolving adversarial behaviors while minimizing loss of cyber system operations. A dynamic defense optimization problem is formulated with multiple protective postures against different types of adversaries with varying levels of skill and persistence. A custom simulation environment was developed and experiments were devised to systematically evaluate the performance of four model-free DRL algorithms against realistic, multi-stage attack sequences. Our results suggest the efficacy of DRL algorithms for proactive cyber defense under multi-stage attack profiles and system uncertainties.
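As a rough illustration of this setup (not the paper's actual framework or algorithms), a tabular Q-learning loop over defensive postures might look like the sketch below; the toy environment and all names are invented for the example:

```python
import random

def q_learning_defense(n_postures, n_states, step, episodes=200,
                       alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning sketch: the defender learns which protective
    posture to adopt in each attack state. `step(state, action)` is a
    hypothetical environment transition returning
    (next_state, reward, done)."""
    rng = random.Random(seed)
    Q = [[0.0] * n_postures for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection over defensive postures
            a = (rng.randrange(n_postures) if rng.random() < eps
                 else max(range(n_postures), key=lambda x: Q[s][x]))
            s2, r, done = step(s, a)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# Toy environment: posture 1 blocks the attack (reward +1, episode ends);
# any other posture lets the multi-stage attack progress (reward -1).
def step(state, action):
    if action == 1:
        return state, 1.0, True
    return min(state + 1, 2), -1.0, state + 1 >= 2

Q = q_learning_defense(n_postures=2, n_states=3, step=step)
```

The paper's deep RL agents replace the table with a neural network and learn against far richer attack dynamics, but the proactive-defense objective has this same shape.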


Unorganized Malicious Attacks Detection

Pang, Ming, Gao, Wei, Tao, Min, Zhou, Zhi-Hua

Neural Information Processing Systems

Recommender systems have attracted much attention during the past decade. Many attack detection algorithms have been developed for better recommendations, mostly focusing on shilling attacks, where an attack organizer produces a large number of user profiles by the same strategy to promote or demote an item. This work considers a different attack style: unorganized malicious attacks, where attackers individually utilize a small number of user profiles to attack different items without an organizer. This attack style occurs in many real applications, yet it remains largely unexplored. We formulate unorganized malicious attack detection as a matrix completion problem, and propose the Unorganized Malicious Attacks detection (UMA) algorithm, based on the alternating splitting augmented Lagrangian method. We verify, both theoretically and empirically, the effectiveness of the proposed approach.
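UMA itself solves a matrix-completion problem via an augmented-Lagrangian method; the underlying intuition — separating ratings into a consensus part plus a sparse anomalous part — can be conveyed with a much simpler residual-thresholding toy (not the paper's algorithm, and the threshold `tau` is an invented knob):

```python
def flag_attacks(R, tau=1.5):
    """Toy sketch of the decomposition intuition: treat each item's mean
    rating as the 'consensus' component and flag ratings whose residual
    from that consensus is large as candidate malicious entries.
    R is a dict mapping (user, item) -> observed rating."""
    items = {i for _, i in R}
    item_mean = {i: sum(r for (_, j), r in R.items() if j == i) /
                    sum(1 for (_, j) in R if j == i) for i in items}
    return {(u, i) for (u, i), r in R.items()
            if abs(r - item_mean[i]) > tau}

# Three genuine users rate i1 around 2; one injected profile pushes it to 5.
R = {("u1", "i1"): 2, ("u2", "i1"): 2, ("u3", "i1"): 2,
     ("atk", "i1"): 5, ("u1", "i2"): 3, ("u2", "i2"): 3}
flags = flag_attacks(R)
```

UMA replaces the item-mean baseline with a jointly optimized low-rank completion of the rating matrix, which is what allows it to handle small, uncoordinated groups of attackers.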


Robustness and Accuracy Tradeoffs for Recommender Systems Under Attack

Seminario, Carlos E. (University of North Carolina at Charlotte) | Wilson, David C. (University of North Carolina at Charlotte)

AAAI Conferences

Recommender systems assist users in the daunting task of sifting through large amounts of data in order to select relevant information or items. Common examples include consumer products and services, such as songs, books, and articles. Unfortunately, such systems may be subject to attack by malicious users who want to manipulate the system’s recommendations to suit their needs: to promote their own (or demote a competitor’s) product/service, or to cause disruption in the recommender system. Attacks can cause the recommender system to become unreliable and untrustworthy, resulting in user dissatisfaction. Developers already face tradeoffs in system efficiency and accuracy, and designing for robustness adds an additional dimension for consideration. In this paper, we show how the underlying implementation choices for item-based and user-based Collaborative Filtering recommender systems can affect the accuracy and robustness of recommender systems. We also show how accuracy and robustness can change over a system’s lifetime by analyzing a set of temporal snapshots from system usage over time. Results provide insight into some of the tradeoffs between robustness and accuracy that operators may need to consider in development and evaluation.
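For context, the user-based collaborative filtering variant discussed here predicts a rating as a similarity-weighted average over the most similar users; this minimal version (with invented data) is illustrative of the general technique, not of the specific systems evaluated in the paper:

```python
import math

def predict(ratings, user, item, k=2):
    """User-based CF sketch: predict `user`'s rating for `item` as a
    cosine-similarity-weighted average over the k most similar users
    who rated that item."""
    def sim(u, v):
        common = set(ratings[u]) & set(ratings[v])
        if not common:
            return 0.0
        dot = sum(ratings[u][i] * ratings[v][i] for i in common)
        nu = math.sqrt(sum(ratings[u][i] ** 2 for i in common))
        nv = math.sqrt(sum(ratings[v][i] ** 2 for i in common))
        return dot / (nu * nv)
    neighbors = sorted((v for v in ratings
                        if v != user and item in ratings[v]),
                       key=lambda v: sim(user, v), reverse=True)[:k]
    num = sum(sim(user, v) * ratings[v][item] for v in neighbors)
    den = sum(sim(user, v) for v in neighbors)
    return num / den if den else None

ratings = {"alice": {"x": 5, "y": 3},
           "bob":   {"x": 5, "y": 3, "z": 4},
           "carol": {"x": 1, "y": 5, "z": 2}}
pred = predict(ratings, "alice", "z")
```

The robustness concern is visible even in this toy: injected profiles that mimic a target user's ratings become high-similarity neighbors and pull the prediction toward the attacker's chosen value.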


Trading Robustness for Privacy in Decentralized Recommender Systems

Cheng, Zunping (University College Dublin) | Hurley, Neil (University College Dublin)

AAAI Conferences

Collaborative filtering (CF) recommender systems are very popular and successful in commercial application fields. One end-user concern is the privacy of the personal data required by such systems in order to make personalized recommendations. Recently, peer-to-peer decentralized architectures have been proposed to address this privacy issue. On the other hand, system managers must be concerned about system robustness. In particular, it has been shown that recommender systems are vulnerable to profile injection, although model-based CF algorithms show greater stability against the malicious attacks studied in the state of the art. In this paper we generalize the generic model for decentralized recommendation and discuss the trade-off between robustness and privacy. In this context, we argue that exposing knowledge of the model parameters allows new, highly effective, model-based attack strategies to be considered. We conclude that the security concerns of privacy and robustness stand in opposition to each other and are difficult to satisfy simultaneously.